Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI

arXiv.org Artificial Intelligence

Remote sensing (RS) applications in the space domain demand machine learning (ML) models that are reliable, robust, and quality-assured, making red teaming a vital approach for identifying and exposing potential flaws and biases. Since both fields advance independently, there is a notable gap in integrating red teaming strategies into RS. This paper introduces a methodology for examining ML models operating on hyperspectral images within the HYPERVIEW challenge, focusing on soil parameter estimation. We use post-hoc explanation methods from the Explainable AI (XAI) domain to critically assess the best-performing model that won the HYPERVIEW challenge and served as an inspiration for the model deployed on board the INTUITION-1 hyperspectral mission. Our approach effectively red teams the model by pinpointing and validating key shortcomings, and constructs a model that achieves comparable performance using just 1% of the input features at a cost of at most a 5% performance loss. Additionally, we propose a novel way of visualizing explanations that integrates domain-specific information about hyperspectral bands (wavelengths) and data transformations, making the explanations better suited to interpreting models for hyperspectral image analysis.
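The abstract's core move — ranking input bands by a post-hoc attribution score and retraining on a small fraction of them — can be sketched on toy data. Everything below (the synthetic "hyperspectral" matrix, the correlation-based importance proxy, the keep fraction, the least-squares refit) is illustrative and is not the challenge model or the paper's actual XAI method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hyperspectral data: 500 samples x 200 bands,
# where only a handful of bands actually drive the soil parameter.
n_samples, n_bands = 500, 200
X = rng.normal(size=(n_samples, n_bands))
true_bands = [10, 55, 120]
y = X[:, true_bands] @ np.array([2.0, -1.5, 1.0]) + rng.normal(scale=0.1, size=n_samples)

def importance_by_correlation(X, y):
    """Cheap proxy for a post-hoc attribution score: |corr(band, target)|."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

scores = importance_by_correlation(X, y)
k = max(1, int(0.02 * n_bands))        # keep only the top ~2% of bands
top = np.argsort(scores)[::-1][:k]

def fit_r2(X, y):
    """Fit ordinary least squares (with intercept) and return R^2."""
    Xb = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ coef
    return 1 - resid.var() / y.var()

r2_full = fit_r2(X, y)
r2_reduced = fit_r2(X[:, top], y)
```

When the signal is concentrated in a few bands, the reduced model recovers nearly all of the full model's fit quality, which is the behavior the paper reports at a much larger scale with a real challenge model.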


ChatCell: Facilitating Single-Cell Analysis with Natural Language

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) rapidly evolve, their influence in science is becoming increasingly prominent. The emerging capabilities of LLMs in task generalization and free-form dialogue can significantly advance fields like chemistry and biology. However, the field of single-cell biology, which studies the foundational building blocks of living organisms, still faces several challenges. High knowledge barriers and limited scalability in current methods restrict the full exploitation of LLMs in mastering single-cell data, impeding direct accessibility and rapid iteration. To this end, we introduce ChatCell, which signifies a paradigm shift by facilitating single-cell analysis with natural language. Leveraging vocabulary adaptation and unified sequence generation, ChatCell has acquired profound expertise in single-cell biology and the capability to accommodate a diverse range of analysis tasks. Extensive experiments further demonstrate ChatCell's robust performance and potential to deepen single-cell insights, paving the way for more accessible and intuitive exploration in this pivotal field. Our project homepage is available at https://zjunlp.github.io/project/ChatCell.


AI Revolution on Chat Bot: Evidence from a Randomized Controlled Experiment

arXiv.org Artificial Intelligence

In recent years, generative AI has undergone major advancements, demonstrating significant promise in augmenting human productivity. Notably, large language models (LLMs), with ChatGPT-4 as an example, have drawn considerable attention. Numerous articles have examined the impact of LLM-based tools on human productivity in lab settings with designed tasks or in observational studies. Despite these recent advances, field experiments applying LLM-based tools in realistic settings remain limited. This paper presents the findings of a field randomized controlled trial assessing the effectiveness of LLM-based tools in providing unmonitored support services for information retrieval.


Arrival Time Prediction for Autonomous Shuttle Services in the Real World: Evidence from Five Cities

arXiv.org Artificial Intelligence

Urban mobility is on the cusp of transformation with the emergence of shared, connected, and cooperative automated vehicles. Yet, for them to be accepted by customers, trust in their punctuality is vital. Many pilot initiatives operate without a fixed schedule, thus enhancing the importance of reliable arrival time (AT) predictions. This study presents an AT prediction system for autonomous shuttles, utilizing separate models for dwell and running time predictions, validated on real-world data from five cities. Alongside established methods such as XGBoost, we explore the benefits of integrating spatial data using graph neural networks (GNN). To accurately handle the case of a shuttle bypassing a stop, we propose a hierarchical model combining a random forest classifier and a GNN. The results for the final AT prediction are promising, showing low errors even when predicting several stops ahead. Yet, no single model emerges as universally superior, and we provide insights into the characteristics of pilot sites that influence the model selection process. Finally, we identify dwell time prediction as the key determinant in overall AT prediction accuracy when autonomous shuttles are deployed in low-traffic areas or under regulatory speed limits. This research provides insights into the current state of autonomous public transport prediction models and paves the way for more data-informed decision-making as the field advances.
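The hierarchical idea in the abstract — a classifier first decides whether a stop is bypassed, and dwell time is added only for served stops, on top of per-segment running times — can be sketched with simple stand-ins. The feature layout and the three `predict_*` functions below are hypothetical placeholders for the paper's random forest and GNN models, not their actual implementations:

```python
import numpy as np

# Toy features per upcoming stop: [passengers waiting, stop-request flag]
stops = np.array([
    [3, 1],   # passengers waiting and a request pressed
    [0, 0],   # nobody waiting, no request -> shuttle may bypass
    [1, 1],
    [0, 0],
])

def predict_skip(features):
    """Stand-in for the skip classifier (a random forest in the paper):
    here, a stop is bypassed when nobody waits and no request was made."""
    return features[0] == 0 and features[1] == 0

def predict_dwell(features):
    """Stand-in dwell-time model: base door time plus per-passenger boarding."""
    return 8.0 + 2.5 * features[0]

def predict_running(segment_index):
    """Stand-in running-time model (a GNN in the paper) per road segment."""
    return 60.0 + 5.0 * segment_index

# Hierarchical AT prediction: running time accumulates over every segment,
# while dwell time is added only at stops the classifier predicts are served.
eta = 0.0
for i, feats in enumerate(stops):
    eta += predict_running(i)
    if not predict_skip(feats):
        eta += predict_dwell(feats)

print(round(eta, 1))  # running 60+65+70+75 plus dwell at served stops 0 and 2
```

The design choice mirrors the paper's finding that dwell time dominates overall AT accuracy in low-traffic deployments: a misclassified skip adds or removes an entire dwell term, whereas running-time errors accumulate gradually.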


Pentagon Names Chief Digital and Artificial Intelligence Officer

#artificialintelligence

Dr. Craig Martell will serve as the Defense Department's new chief digital and artificial intelligence officer. Martell, who most recently served as the head of machine learning at Lyft after AI and machine learning-related positions with LinkedIn and Dropbox, will now serve as the Pentagon's senior official responsible for the "adoption of data, analytics, digital solutions and AI functions," according to a Pentagon press statement. "Advances in AI and machine learning are critical to delivering the capabilities we need to address key challenges both today and into the future," Deputy Secretary of Defense Kathleen H. Hicks said in a statement. "With Craig's appointment, we hope to see the department increase the speed at which we develop and field advances in AI, data analytics, and machine-learning technology. He brings cutting-edge industry experience to apply to our unique mission set."


BrainChip Success in 2020 Advances Fields of on-Chip Learning

#artificialintelligence

BrainChip Holdings Ltd., a leading provider of ultra-low power, high-performance AI technology, ended the 2020 calendar year having made significant strides in the development of its technology, backed by the launch of its Early Access Program (EAP), the availability of Akida evaluation boards, new partnerships, and the expansion of its executive leadership and global facilities. "This past year saw significant progress in the development of the Akida technology in terms of both market readiness and the increase in market possibilities that the solution will provide immediate impact in." The Company's EAP was launched in June, targeting specific customers in a diverse set of end markets in order to ensure availability of initial devices and evaluation systems for key applications. Multiple customers have committed to the advance purchase of evaluation systems for a range of strategic Edge applications, including Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AV), Unmanned Aerial Vehicles (UAV), Edge vision systems, and factory automation. Among those joining the EAP is VORAGO Technologies, in a collaboration intended to support a Phase I NASA program for a neuromorphic processor that meets spaceflight requirements. BrainChip is also collaborating with Tier-1 automotive supplier Valeo Corporation to develop neural network processing solutions for ADAS and AV.


López de Prado on machine learning in finance « Mathematical Investor

#artificialintelligence

Marcos López de Prado, whom we have featured in previous Math Scholar articles (see Article A, Article B and Article C), has been invited to give a keynote presentation at the ACM Conference on Artificial Intelligence in Finance, to be conducted virtually October 14-16, 2020. López de Prado is a faculty member at Cornell University and CEO of True Positive Technologies, LP, a private firm that provides machine learning techniques for finance applications. He is also the author of two books in the field: Advances in Financial Machine Learning, published by Wiley (2018), and Machine Learning for Asset Managers, published by Cambridge University Press (2020). López de Prado has graciously provided the viewgraph file for the talk he is scheduled to present at the ACM Conference on AI in Finance; for full details, see the viewgraphs at the link above.


The prospects of quantum computing in computational molecular biology

arXiv.org Machine Learning

Quantum computers can in principle solve certain problems exponentially more quickly than their classical counterparts. We have not yet reached the advent of useful quantum computation, but when we do, it will affect nearly all scientific disciplines. In this review, we examine how current quantum algorithms could revolutionize computational biology and bioinformatics. There are potential benefits across the entire field, from the ability to process vast amounts of information and run machine learning algorithms far more efficiently, to algorithms for quantum simulation that are poised to improve computational calculations in drug discovery, to quantum algorithms for optimization that may advance fields from protein structure prediction to network analysis. However, these exciting prospects are susceptible to "hype", and it is also important to recognize the caveats and challenges in this new technology. Our aim is to introduce the promise and limitations of emerging quantum computing technologies in the areas of computational molecular biology and bioinformatics.


Teaching robots to see -- and understand

#artificialintelligence

Machine vision is a crucial missing link holding back the robotization of industries like manufacturing and shipping. But even as that field advances rapidly, there's a larger hurdle that still blocks widespread automation -- machine understanding. Why it matters: Up against a shortage of workers, those sectors stand to benefit hugely from automation. But the people working in warehouses and factories could find their jobs changed or eliminated if vision technology sees new breakthroughs.


Artificial Intelligence Research Needs Responsible Publication Norms

#artificialintelligence

After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can "generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization--all without task-specific training." When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy--but it was how OpenAI chose to release the new technology that really caused a firestorm. There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the entire field advances. However, in February, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake or spam content.